Towards Accurately Extracting Facial Expression Parameters

Authors

  • Xuecheng Liu
  • Dinghuang Ji
  • Zhaoqi Wang
  • Shihong Xia
Abstract

Existing methods extract facial expression parameters from captured expression data in two steps. First, the absolute head orientation is estimated and the captured expression is transformed from world coordinates to local coordinates. Second, expression parameters are extracted from the local-coordinate data. However, this pipeline suffers from severe error accumulation: in the second step, the error in the estimated head orientation is amplified during parameter extraction and accumulates in the synthesized facial expression. In this paper, we propose an optimization-based parameter extraction method that prevents large error accumulation. In our optimization model, the error between the synthesized and captured expressions serves as the objective, while the absolute head orientation and the expression parameters are treated as optimization variables and computed simultaneously. This joint scheme reduces error accumulation effectively. Experiments show that, for linear blendshape expression parameterization, the proposed method improves both accuracy and efficiency over existing methods; for nonlinear blendshape parameterization, it synthesizes even more accurate results than previous work.

Keywords: computer facial animation, expression capture, blendshape weights extraction, optimization, error accumulation
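The joint formulation described in the abstract can be sketched as follows. This is a minimal toy example with synthetic data: the Euler-angle pose parameterization, the linear blendshape model, and all variable names are illustrative assumptions, not the authors' actual implementation. The key point is that head pose (rotation and translation) and blendshape weights sit in a single parameter vector and are optimized together against the captured vertices, rather than estimating pose first and weights second.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical toy blendshape model: V vertices, K blendshape bases.
rng = np.random.default_rng(0)
V, K = 50, 4
neutral = rng.normal(size=(V, 3))          # neutral face vertices (local coords)
bases = rng.normal(size=(K, V, 3)) * 0.1   # blendshape displacement bases

def euler_to_matrix(rx, ry, rz):
    """Rotation matrix from Euler angles (radians), R = Rz @ Ry @ Rx."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def synthesize(params):
    """World-space vertices from params = [rx, ry, rz, tx, ty, tz, w_1..w_K]."""
    R = euler_to_matrix(*params[:3])
    t = params[3:6]
    w = params[6:]
    local = neutral + np.tensordot(w, bases, axes=1)  # blend in local coords
    return local @ R.T + t                            # apply head pose

# Simulate a "captured" expression from known ground-truth parameters.
true_params = np.array([0.2, -0.1, 0.3, 1.0, 0.5, -0.2, 0.7, 0.0, 0.3, 0.5])
captured = synthesize(true_params)

def residual(params):
    # Objective: per-vertex error between synthesized and captured expressions.
    return (synthesize(params) - captured).ravel()

# Jointly optimize head pose and blendshape weights (no two-step split).
x0 = np.zeros(6 + K)
result = least_squares(residual, x0)
print("residual norm:", np.linalg.norm(result.fun))
```

Because pose and weights are solved in one nonlinear least-squares problem, an error in the pose cannot be silently absorbed and amplified by the weight-extraction step, which is the error-accumulation failure mode the paper targets.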


Related articles

Extracting Facial Motion Parameters by Tracking Feature Points

A method for extracting facial motion parameters is proposed. The method consists of three steps. First, the feature points of the face, selected automatically in the first frame, are tracked in successive frames. Then, the feature points are connected with Delaunay triangulation so that the motion of each point relative to the surrounding points can be computed. Finally, muscle motions are estim...
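The second step of that pipeline, connecting tracked points with a Delaunay triangulation and measuring each point's motion relative to its neighbors, can be sketched as below. The random point data and the mean-of-neighbors relative-motion measure are illustrative assumptions, not the cited paper's exact formulation.

```python
import numpy as np
from scipy.spatial import Delaunay

# Hypothetical tracked feature points: positions in two consecutive frames.
rng = np.random.default_rng(1)
pts0 = rng.uniform(0, 100, size=(12, 2))        # frame-0 positions
pts1 = pts0 + rng.normal(0, 1.0, size=(12, 2))  # frame-1 positions (moved)

# Connect the frame-0 points with a Delaunay triangulation.
tri = Delaunay(pts0)

# Recover each vertex's neighbors from the triangulation edges.
indptr, indices = tri.vertex_neighbor_vertices
neighbors = [indices[indptr[i]:indptr[i + 1]] for i in range(len(pts0))]

# Motion of each point relative to the mean motion of its neighbors,
# which factors out shared (global) motion and keeps local deformation.
motion = pts1 - pts0
relative = np.array([motion[i] - motion[nb].mean(axis=0)
                     for i, nb in enumerate(neighbors)])
print(relative.shape)
```

Measuring motion relative to Delaunay neighbors makes the signal insensitive to whole-face translation, leaving mostly the local deformation that muscle-motion estimation needs.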


An Approach of Extracting Facial Components for Facial Expression Detection using Fiducial Point Detection

Facial expression detection, or emotion recognition, is one of the rising fields of research on intelligent systems. Emotion plays a significant role in non-verbal communication. Efficient face and facial feature detection algorithms are required to detect emotion at a particular moment. The paper presents a new approach to the problem of extracting the facial components from a still facial ...


Statistical modeling for facial expression analysis and synthesis

Facial expression interpretation, recognition and analysis are key issues in visual communication and man-machine interaction. In this paper, we present a technique for extracting appearance parameters from a natural image or video sequence, which allows reproduction of natural-looking expressive synthetic faces. This technique was used to perform face synthesis and tracking in video sequence...


Synthesis of human facial expressions based on the distribution of elastic force applied by control points

Facial expressions play an essential role in delivering emotions, so facial expression synthesis has gained interest in many fields such as computer vision and graphics. Facial actions are generated by contraction and relaxation of the muscles innervated by facial nerves. The combinations of those muscle motions are numerous; therefore, facial expressions are often person-specific. But in general, f...


Perception-driven facial expression synthesis

We propose a novel platform to flexibly synthesize any arbitrary meaningful facial expression in the absence of actor performance data for that expression. With techniques from computer graphics, we synthesized random arbitrary dynamic facial expression animations. The synthesis was controlled by parametrically modulating Action Units (AUs) taken from the Facial Action Coding System (FACS). We ...




Journal:

Volume   Issue 

Pages  -

Publication date: 2011